

ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games

Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick

Neural Information Processing Systems

In this paper, we propose ELF, an Extensive, Lightweight and Flexible platform for fundamental reinforcement learning research. Using ELF, we implement a highly customizable real-time strategy (RTS) engine with three game environments (Mini-RTS, Capture the Flag and Tower Defense). Mini-RTS, as a miniature version of StarCraft, captures key game dynamics and runs at 40K frames per second (FPS) per core on a laptop. When coupled with modern reinforcement learning methods, the system can train a full-game bot against built-in AIs end-to-end in one day with 6 CPUs and 1 GPU. In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, changes in game parameters, and can host existing C/C++-based game environments like ALE [4]. Using ELF, we thoroughly explore training parameters and show that a network with Leaky ReLU [17] and Batch Normalization [11] coupled with long-horizon training and progressive curriculum beats the rule-based built-in AI more than 70% of the time in the full game of Mini-RTS. Strong performance is also achieved on the other two games. In game replays, we show our agents learn interesting strategies.


Generating Real-Time Strategy Game Units Using Search-Based Procedural Content Generation and Monte Carlo Tree Search

Sorochan, Kynan, Guzdial, Matthew

arXiv.org Artificial Intelligence

Real-Time Strategy (RTS) game unit generation is an unexplored area of Procedural Content Generation (PCG) research, which leaves the question of how to automatically generate interesting and balanced units unanswered. Creating unique and balanced units can be a difficult task when designing an RTS game, even for humans. Having an automated method of designing units could help developers speed up the creation process as well as find new ideas. In this work we propose a method of generating balanced and useful RTS units. We draw on Search-Based PCG and a fitness function based on Monte Carlo Tree Search (MCTS). We present ten units generated by our system designed to be used in the game microRTS, as well as results demonstrating that these units are unique, useful, and balanced.
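The Search-Based PCG loop described above can be sketched minimally: candidate units are stat vectors, search mutates them, and a fitness function scores each candidate. This is an illustrative hill-climbing sketch only; the stat names, ranges, and the stubbed fitness stand in for the paper's microRTS unit encoding and its MCTS-playout-based evaluation, which are not reproduced here.

```python
import random

# Illustrative unit encoding: stat names and ranges are assumptions, not the paper's.
STAT_RANGES = {"hp": (1, 10), "damage": (1, 4), "range": (1, 3), "cost": (1, 5)}

def random_unit():
    return {k: random.randint(lo, hi) for k, (lo, hi) in STAT_RANGES.items()}

def mutate(unit):
    # Nudge one stat by +/-1, clamped to its legal range.
    child = dict(unit)
    stat = random.choice(list(STAT_RANGES))
    lo, hi = STAT_RANGES[stat]
    child[stat] = min(hi, max(lo, child[stat] + random.choice((-1, 1))))
    return child

def fitness(unit):
    # Stub standing in for the MCTS playout evaluation: prefer units whose
    # combat value is proportionate to their cost, penalizing both useless
    # and overpowered designs (the "balanced and useful" criterion).
    power = unit["hp"] * unit["damage"] * unit["range"]
    return -abs(power / unit["cost"] - 8.0)

def evolve(generations=200, seed=0):
    # Greedy hill climb: keep a mutation only if it improves fitness.
    random.seed(seed)
    best = random_unit()
    for _ in range(generations):
        child = mutate(best)
        if fitness(child) > fitness(best):
            best = child
    return best
```

Swapping the stub for a fitness that runs MCTS-driven playouts of the candidate unit against a baseline army recovers the shape of the paper's pipeline.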


Asymmetric Action Abstractions for Planning in Real-Time Strategy Games

Moraes, Rubens O. | Nascimento, Mario A. | Lelis, Levi H.S. (University of Alberta)

Journal of Artificial Intelligence Research

Action abstractions restrict the number of legal actions available for real-time planning in zero-sum extensive-form games, thus allowing algorithms to focus their search on a set of promising actions. Even though unabstracted game trees can lead to optimal policies, due to real-time constraints and the tree size, they are not a practical choice. In this context, we introduce an action abstraction scheme which we call asymmetric action abstraction. Asymmetric abstractions allow search algorithms to "pay more attention" to some aspects of the game by unevenly dividing the algorithm's search effort amongst different aspects of the game. We also introduce four algorithms that search in asymmetrically abstracted game trees to evaluate the effectiveness of our abstraction schemes. Two of our algorithms are adaptations of algorithms developed for searching in action-abstracted spaces, Portfolio Greedy Search and Stratified Strategy Selection, and the other two are adaptations of an algorithm developed for searching in unabstracted spaces, NaïveMCTS. An extensive set of experiments in a real-time strategy game shows that search algorithms using asymmetric abstractions are able to outperform all other search algorithms tested.
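The branching-factor effect of an asymmetric abstraction can be shown with a toy enumeration. The action names and the two-action script set below are illustrative, not taken from the paper: the point is only that giving a chosen subset of units the full action set, while restricting the rest, yields a joint-action space between the fully abstracted and unabstracted extremes.

```python
from itertools import product

FULL_ACTIONS = ["attack", "move_n", "move_s", "move_e", "move_w", "wait"]
SCRIPT_ACTIONS = ["attack", "wait"]  # illustrative portfolio-style restricted set

def joint_actions(units, unrestricted):
    """Enumerate joint actions under an asymmetric abstraction: units in
    `unrestricted` keep their full action set; the rest are limited to the
    script-derived set."""
    per_unit = [FULL_ACTIONS if u in unrestricted else SCRIPT_ACTIONS
                for u in units]
    return list(product(*per_unit))

units = ["u1", "u2", "u3"]
symmetric = joint_actions(units, unrestricted=set())          # 2^3  = 8
asymmetric = joint_actions(units, unrestricted={"u1"})        # 6*2*2 = 24
unabstracted = joint_actions(units, unrestricted=set(units))  # 6^3  = 216
```

The search algorithm then spends its budget on the 24-action asymmetric tree rather than the 216-action full tree, "paying more attention" to unit u1.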


Ontañón

AAAI Conferences

Real-time strategy (RTS) games are hard from an AI point of view because they have enormous state spaces, combinatorial branching factors, allow simultaneous and durative actions, and players have very little time to choose actions. For these reasons, standard game tree search methods such as alpha-beta search or Monte Carlo Tree Search (MCTS) are not sufficient by themselves to handle these games. This paper presents an alternative approach called Adversarial Hierarchical Task Network (AHTN) planning that combines ideas from game tree search with HTN planning. We present the basic algorithm, relate it to existing adversarial hierarchical planning methods, and present new extensions for simultaneous and durative actions to handle RTS games.


Robertson

AAAI Conferences

The main objective of this research is to increase the quality of AI used in commercial RTS games, which has seen little improvement over the past decade. This objective will be addressed by investigating the use of a learning-by-observation, case-based reasoning agent, which can be applied to new RTS games with minimal development effort. To be successful, this agent must compare favourably with standard commercial RTS AI techniques: it must be easier to apply, have reasonable resource requirements, and produce a better player. Currently, a prototype implementation has been produced for the game StarCraft, and it has demonstrated the need for processing large sets of input data into a more concise form for use at run-time.


Uriarte

AAAI Conferences

From an AI point of view, Real-Time Strategy (RTS) games are hard because they have enormous state spaces, and they are real-time and partially observable. In this paper, we present an approach to deploy game-tree search in RTS games by using game state abstraction. We propose a high-level abstract representation of the game state that significantly reduces the branching factor when used for game-tree search algorithms. Using this high-level representation, we evaluate versions of alpha-beta search and of Monte Carlo Tree Search (MCTS).
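One common form of such an abstraction collapses exact unit coordinates into region-level counts, so that many concrete states map to one abstract state. The sketch below is an illustrative version under that assumption; the region size and the (owner, type, x, y) tuple encoding are hypothetical, not the paper's representation.

```python
from collections import Counter

def abstract_state(units, region_size=8):
    """Collapse exact unit positions into per-region counts: two concrete
    states that place the same owners/unit types in the same regions map to
    the same abstract state, shrinking the search space."""
    counts = Counter(
        (owner, utype, (x // region_size, y // region_size))
        for owner, utype, x, y in units
    )
    return frozenset(counts.items())

# Nearby placements collapse to one abstract state:
s1 = abstract_state([("p1", "marine", 3, 4), ("p2", "marine", 60, 60)])
s2 = abstract_state([("p1", "marine", 5, 7), ("p2", "marine", 57, 63)])
assert s1 == s2
```

Game-tree search over these abstract states then branches over region-to-region group moves instead of per-unit pixel moves.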


Schneider

AAAI Conferences

Real-time strategy (RTS) games pose challenges to AI research on many levels, ranging from selecting targets in unit combat situations, over efficient multi-unit pathfinding, to high-level economic decisions. Due to the complexity of RTS games, writing competitive AI systems for these games requires high speed adaptive algorithms and simplified models of the game world. In this paper we focus on motion prediction and motion planning in StarCraft -- a popular RTS game for which a C++ API exists that allows us to write AI systems to play the game. We explore our existing unit motion model of StarCraft and find and fix some inconsistencies to improve the model by accounting for systematic command execution delays and unit acceleration. We then investigate ways to improve existing combat motion planning systems that are based on discrete unit motion sets, and show that search-based algorithms and scripts can benefit from using a new direction set that considers moves towards the closest enemy unit, away from it, and perpendicular to both directions.
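The enemy-relative direction set from the last sentence is simple to compute: normalize the vector to the closest enemy, then take its negation and the two perpendiculars. This is a minimal geometric sketch of that idea, not the paper's implementation; function and variable names are illustrative.

```python
import math

def combat_directions(unit_pos, enemy_pos):
    """Return the four unit vectors of the enemy-relative direction set:
    toward the closest enemy, away from it, and the two perpendiculars."""
    dx = enemy_pos[0] - unit_pos[0]
    dy = enemy_pos[1] - unit_pos[1]
    norm = math.hypot(dx, dy) or 1.0  # avoid division by zero when co-located
    toward = (dx / norm, dy / norm)
    away = (-toward[0], -toward[1])
    # Rotate `toward` by +/-90 degrees for the two perpendicular moves.
    perp_left = (-toward[1], toward[0])
    perp_right = (toward[1], -toward[0])
    return [toward, away, perp_left, perp_right]
```

A search-based combat planner would evaluate candidate moves along these four directions instead of a fixed compass grid, so the move set stays meaningful regardless of where the enemy actually is.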


MOBAs and the Future of AI Research

#artificialintelligence

In previous articles, I've looked at a variety of video games that have proven useful test-beds for AI research, with the likes of Ms. Pac-Man, Super Mario Bros. and more recently StarCraft. But in this instance I want to look at a genre that is still relatively new whilst presenting exciting opportunities for AI research: Multiplayer Online Battle Arenas (MOBA). The MOBA genre is undoubtedly one of the most popular in gaming today, but what impact could this have upon AI research? I'm going to provide an overview of MOBAs as a genre, what aspects of their design can prove interesting to AI research and look at some projects that are now bearing fruit both in academia and in corporate research labs. Multiplayer Online Battle Arenas are an offshoot of Real-time Strategy (RTS) games, originating with the Aeon of Strife map for Blizzard's StarCraft, followed by the 'Defence of the Ancients' mod for WarCraft III: Reign of Chaos and its expansion The Frozen Throne.


Explaining Reinforcement Learning to Mere Mortals: An Empirical Study

Anderson, Andrew, Dodge, Jonathan, Sadarangani, Amrita, Juozapaitis, Zoe, Newman, Evan, Irvine, Jed, Chattopadhyay, Souti, Fern, Alan, Burnett, Margaret

arXiv.org Artificial Intelligence

We present a user study to investigate the impact of explanations on non-experts' understanding of reinforcement learning (RL) agents. We investigate both a common RL visualization, saliency maps (the focus of attention), and a more recent explanation type, reward-decomposition bars (predictions of future types of rewards). We designed a 124-participant, four-treatment experiment to compare participants' mental models of an RL agent in a simple Real-Time Strategy (RTS) game. Our results show that the combination of both saliency and reward bars was needed to achieve a statistically significant improvement in mental model score over the control. In addition, our qualitative analysis of the data reveals a number of effects for further study.
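The reward-decomposition idea behind those bars can be illustrated in a few lines: the agent's value estimate for each action is kept as a sum of per-reward-type components, so a viewer can see which reward type drives a choice. The action names, reward types, and numbers below are purely illustrative, not from the study.

```python
# Per-action value estimates split by reward type; each bar in a
# reward-decomposition display corresponds to one of these components,
# and the bars for an action sum to its total value.
decomposed_q = {
    "attack":  {"enemy_damage": 3.0, "unit_loss": -1.5, "resource": 0.2},
    "retreat": {"enemy_damage": 0.0, "unit_loss": -0.2, "resource": 0.5},
}

def total_q(action):
    # The scalar the policy actually maximizes is the sum of the components.
    return sum(decomposed_q[action].values())

best = max(decomposed_q, key=total_q)
```

Here the display would show that "attack" wins on expected enemy damage despite a larger expected unit-loss penalty, which is exactly the kind of "why this action" reading the study tested.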